In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission.
Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the IPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a writeup template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.
The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue them, you can include the code in this IPython notebook and also discuss the results in the writeup file.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode.
# Load pickled data
import pickle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import random
signnames = pd.read_csv('signnames.csv')
training_file = './data/train.p'
validation_file='./data/valid.p'
testing_file = './data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
The pickled data is a dictionary with 4 key/value pairs:
- 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
- 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
- 'sizes' is a list containing tuples, (width, height), representing the original width and height of the image.
- 'coords' is a list containing tuples, (x1, y1, x2, y2), representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES.

Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.
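Before filling in the summary functions, a quick peek at the loaded dictionaries is a useful sanity check (a minimal sketch using the variables loaded above; the comments describe what the dataset description implies, not hard-coded results):
print(train.keys())                               # expect features, labels, sizes, coords
print(X_train.shape, y_train.shape)               # (num examples, 32, 32, channels), (num examples,)
print(pd.Series(y_train).value_counts().head())   # the most frequent class ids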
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
def inspectDataSetSize():
assert(len(X_train) == len(y_train))
assert(len(X_valid) == len(y_valid))
assert(len(X_test) == len(y_test))
ntrain = len(y_train)
nvalid = len(y_valid)
ntest = len(y_test)
print("Number of training examples =", ntrain)
print("Number of validation examples =", nvalid)
print("Number of testing examples =", ntest)
return ntrain, nvalid, ntest
def inspectImageSize():
image_shape_test = X_test.shape[1:]
image_shape_train = X_train.shape[1:]
image_shape_valid = X_valid.shape[1:]
print("Train image data shape =", image_shape_train)
print("Validation image data shape =", image_shape_valid)
print("Test image data shape =", image_shape_test)
def inspectClasses():
print("Number of train classes =", np.unique(y_train).size)
print("Number of test classes =", np.unique(y_test).size)
print("Number of valid classes =", np.unique(y_valid).size)
return np.unique(y_train).size, np.unique(y_train)
ntrain, nvalid, ntest = inspectDataSetSize()
nclasses, classes = inspectClasses()
inspectImageSize()
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended; suggestions include plotting traffic sign images, plotting the count of each sign, etc.
The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.
NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.
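Before the fuller visualizations below, a single random training image can be displayed as a quick first look (a minimal sketch; the class-count histogram and per-class image grids follow in the next cells):
idx = random.randint(0, len(X_train) - 1)
plt.imshow(X_train[idx])
plt.title('class id {}'.format(y_train[idx]))
plt.show()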
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
# Visualizations will be shown in the notebook.
%matplotlib inline
def inspectClassesBinCount():
pltChangeSize()
plt.hist(y_train, bins=nclasses, label='train')
plt.hist(y_test, bins=nclasses, label='test')
plt.hist(y_valid, bins=nclasses, label='valid')
plt.title('Image classes Histogram')
plt.xlabel('Classes')
plt.ylabel('Counts')
plt.legend()
def pltChangeSize():
fig_size = plt.rcParams["figure.figsize"]
# Set figure width to 12 and height to 9
fig_size[0] = 12
fig_size[1] = 9
plt.rcParams["figure.figsize"] = fig_size
print("classes: \n", classes)
inspectClassesBinCount()
def plotImagesForClasses():
for cls in classes:
fig, axes = plt.subplots(ncols=12, figsize=(12, 1))
signName = signnames[signnames['ClassId'] == cls]
print(signName.values)
idxs = np.where(y_train==cls)[0]
for i in range(12):
idx = random.choice(idxs)
axes[i].set_xticks([])
axes[i].set_yticks([])
axes[i].set_title(idx)
axes[i].imshow(X_train[idx])
plt.show()
plotImagesForClasses()
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.
There are various aspects to consider when thinking about this problem: the neural network architecture (is the network over- or underfitting?), preprocessing techniques (normalization, RGB to grayscale, etc.), the number of examples per label (some classes have far more than others), and generating additional data.
Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper, but it's good practice to try to read papers like these.
NOTE: The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
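The first step here is data augmentation. The training set is strongly imbalanced across classes, so the cell below uses np.bincount to find under-represented ("weak") classes and derives a per-class boosting factor that targets roughly 2600 examples per class; those factors drive the Keras image generator further down.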
# Count how many examples each class has in the training set.
bincount = np.bincount(y_train)
print('Bincount: \n', bincount)
# Aim for roughly 2600 examples per class: classes with a factor > 1 are the
# under-represented ("weak") classes that will be boosted with augmented copies.
roundup = (2600 / bincount).astype('int')
print('Weak classes: \n', np.where(roundup > 1))
print('Boosting factors: \n', repr(roundup[np.where(roundup > 1)] - 1))
### Augmentation using Keras datagenerator
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
import cv2
import math
datagen = ImageDataGenerator(
rotation_range=10,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.1,
zoom_range=0.1,
    fill_mode='nearest',
    dim_ordering='tf')  # Keras 1.x argument; Keras 2 renamed it to data_format='channels_last'
weakClasses = [ 0, 3, 6, 7, 8, 11, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24,
26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 39, 40, 41, 42]
augmentfactors = [13, 1, 6, 1, 1, 1, 2, 3, 6, 1, 1, 13, 7, 8, 6, 4, 9,
3, 11, 4, 9, 5, 2, 11, 3, 6, 1, 6, 13, 8, 7, 11, 11]
def augmentWeakClasses(X_train, y_train):
for cls, factor in zip(weakClasses, augmentfactors):
idx = np.where(y_train==cls)
batches = math.ceil(factor * (len(y_train[idx]) / 32 ))
print('working on class {}, original size {}, enlarge by {} in {} batches'.format(cls, len(y_train[idx]), factor, batches))
print()
for X_batch, Y_batch in datagen.flow(X_train[idx], y_train[idx], batch_size=32):
batches -= 1
if batches <= 0:
break
X_batch = X_batch.astype(np.uint8)
X_train = np.append(X_train, X_batch, axis=0)
y_train = np.append(y_train, Y_batch, axis=0)
return X_train, y_train
X_train, y_train = augmentWeakClasses(X_train, y_train)
Histogram after augmentation:
inspectClassesBinCount()
### Preprocess the data here. Preprocessing steps could include normalization, converting to grayscale, etc.
### Feel free to use as many code cells as needed.
from sklearn.utils import shuffle
import cv2
def rgb2gray(rgb):
    # Standard luminosity conversion (ITU-R BT.601 weights).
    return np.dot(rgb[..., :3], [0.299, 0.587, 0.114]).astype('uint8')
def histogramNormalized(img):
return cv2.equalizeHist(img)
def meanCentralized(X_train, X_test, X_valid):
    # Zero-center each set around its own mean, then divide by 180 (a rough
    # scaling constant) so the values end up roughly within [-1, 1].
    X_train = (X_train - np.mean(X_train)) / 180
    X_test = (X_test - np.mean(X_test)) / 180
    X_valid = (X_valid - np.mean(X_valid)) / 180
    return X_train, X_test, X_valid
def histogramNormalizeAll(X_train, X_test, X_valid):
    # Equalize each image's histogram in place. Note: rebinding the loop variable
    # (for img in X: img = ...) would not modify the arrays, so index explicitly.
    for i in range(len(X_train)):
        X_train[i] = histogramNormalized(X_train[i])
    for i in range(len(X_test)):
        X_test[i] = histogramNormalized(X_test[i])
    for i in range(len(X_valid)):
        X_valid[i] = histogramNormalized(X_valid[i])
X_train_grayscale = rgb2gray(X_train)
X_test_grayscale = rgb2gray(X_test)
X_valid_grayscale = rgb2gray(X_valid)
histogramNormalizeAll(X_train_grayscale, X_test_grayscale, X_valid_grayscale)
print('before mean normalization, {:.3f}'.format(np.mean(X_train_grayscale)))
X_train_grayscale, X_test_grayscale, X_valid_grayscale = meanCentralized(X_train_grayscale, X_test_grayscale, X_valid_grayscale)
print('after mean normalization, {:.3f}'.format(np.mean(X_train_grayscale)))
X_train_grayscale = np.expand_dims(X_train_grayscale, 3)
X_test_grayscale = np.expand_dims(X_test_grayscale, 3)
X_valid_grayscale = np.expand_dims(X_valid_grayscale, 3)
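A quick shape check (a sanity check added here, not part of the original pipeline) confirms the arrays now match the (batch, 32, 32, 1) input shape the placeholder below expects:
print(X_train_grayscale.shape, X_valid_grayscale.shape, X_test_grayscale.shape)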
from tensorflow.contrib.layers import flatten
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 32
rate = 0.001
### Define your architecture here.
### Feel free to use as many code cells as needed.
def leakyRelu(x, alpha=0.01):
    # Leaky ReLU: max(alpha * x, x) keeps a small gradient for negative inputs.
    return tf.maximum(alpha * x, x)
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
    # SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 32x32x64 (5x5 kernel, SAME padding).
with tf.name_scope("conv1") as scope:
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 64), mean = mu, stddev = sigma), name='conv1_w')
conv1_b = tf.Variable(tf.zeros(64), name='conv1_b')
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='SAME') + conv1_b
# SOLUTION: Activation.
conv1 = leakyRelu(conv1)
    # SOLUTION: Pooling. Input = 32x32x64. Output = 16x16x64.
with tf.name_scope("conv1_maxpool") as scope:
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# conv1 = tf.nn.dropout(conv1, 0.7)
    # SOLUTION: Layer 2: Convolutional. Input = 16x16x64. Output = 14x14x128 (3x3 kernel, VALID padding).
with tf.name_scope("conv2") as scope:
conv2_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 64, 128), mean = mu, stddev = sigma), name='conv2_w')
conv2_b = tf.Variable(tf.zeros(128), name='conv2_b')
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = leakyRelu(conv2)
    # SOLUTION: Pooling. Input = 14x14x128. Output = 7x7x128.
with tf.name_scope("conv2_maxpool") as scope:
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    # SOLUTION: Layer 3: Convolutional. Input = 7x7x128. Output = 5x5x256 (3x3 kernel, VALID padding).
with tf.name_scope("conv3") as scope:
conv3_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 128, 256), mean = mu, stddev = sigma), name='conv3_w')
conv3_b = tf.Variable(tf.zeros(256), name='conv3_b')
conv3 = tf.nn.conv2d(conv2, conv3_W, strides=[1, 1, 1, 1], padding='VALID') + conv3_b
# SOLUTION: Activation.
conv3 = leakyRelu(conv3)
    # SOLUTION: Pooling after conv3 is disabled; the 5x5x256 output goes straight to flatten.
    # conv3 = tf.nn.max_pool(conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    # SOLUTION: Flatten. Input = 5x5x256. Output = 6400.
with tf.name_scope("flatten") as scope:
fc0 = flatten(conv3)
    # SOLUTION: Layer 4: Fully Connected. Input = 6400. Output = 1024.
with tf.name_scope("fc1") as scope:
fc1_W = tf.Variable(tf.truncated_normal(shape=(6400, 1024), mean = mu, stddev = sigma), name='fc1_w')
fc1_b = tf.Variable(tf.zeros(1024), name='fc1_b')
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = leakyRelu(fc1)
    # SOLUTION: Layer 5: Fully Connected. Input = 1024. Output = 256 (no activation before the next layer).
with tf.name_scope("fc2") as scope:
fc2_W = tf.Variable(tf.truncated_normal(shape=(1024, 256), mean = mu, stddev = sigma), name='fc2_w')
fc2_b = tf.Variable(tf.zeros(256), name='fc2_b')
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
    # SOLUTION: Layer 6: Fully Connected. Input = 256. Output = 128.
with tf.name_scope("fc3") as scope:
fc3_W = tf.Variable(tf.truncated_normal(shape=(256, 128), mean = mu, stddev = sigma), name='fc3_w')
fc3_b = tf.Variable(tf.zeros(128), name='fc3_b')
fc3 = tf.matmul(fc2, fc3_W) + fc3_b
# SOLUTION: Activation.
fc3 = leakyRelu(fc3)
fc3 = tf.nn.dropout(fc3, 0.7)
    # SOLUTION: Layer 7 (output): Fully Connected. Input = 128. Output = nclasses.
with tf.name_scope("fc4") as scope:
fc4_W = tf.Variable(tf.truncated_normal(shape=(128, nclasses), mean = mu, stddev = sigma), name='fc4_w')
fc4_b = tf.Variable(tf.zeros(nclasses), name='fc4_b')
logits = tf.matmul(fc3, fc4_W) + fc4_b
return logits
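To summarize the shapes implied by the code above: 32x32x1 input → 5x5 conv with 64 filters (SAME) → 32x32x64 → 2x2 max pool → 16x16x64 → 3x3 conv with 128 filters (VALID) → 14x14x128 → 2x2 max pool → 7x7x128 → 3x3 conv with 256 filters (VALID) → 5x5x256 → flatten (6400) → fully connected 1024 → 256 → 128 → dropout (keep 0.7) → nclasses logits.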
with tf.name_scope('input'):
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, nclasses)
logits = LeNet(x)
# L2 regularization over the trainable weights; bias variables (named with '_b') are excluded.
vars = tf.trainable_variables()
lossL2 = tf.add_n([tf.nn.l2_loss(v) for v in vars
                   if '_b' not in v.name]) * 0.0003
with tf.name_scope("softmax"):
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits) + lossL2
with tf.name_scope("loss"):
loss_operation = tf.reduce_mean(cross_entropy)
with tf.name_scope("Adam"):
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
with tf.name_scope("training"):
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
with tf.name_scope("accuracy"):
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
loss_summary = tf.summary.scalar("loss", loss_operation)
accuracy_summary = tf.summary.scalar("accuracy", accuracy_operation)
summary_op = tf.summary.merge_all()
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
total_loss = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy, loss = sess.run([accuracy_operation, loss_operation], feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
total_loss += (loss * len(batch_x))
return total_accuracy / num_examples, total_loss / num_examples
A validation set can be used to assess how well the model is performing. A low accuracy on both the training and validation sets implies underfitting. A high accuracy on the training set but a low accuracy on the validation set implies overfitting.
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
EARLY_STOPPING = 3
def runOnce(X_train_grayscale, y_train):
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
train_writer = tf.summary.FileWriter("./tflog24/train", graph=tf.get_default_graph())
num_examples = len(X_train_grayscale)
best_accuracy = 0
best_loss = 99
best_epoch =0
print("Training...")
print()
estopping = EARLY_STOPPING
step_counter = 0
for i in range(EPOCHS):
X_train_grayscale, y_train = shuffle(X_train_grayscale, y_train)
for offset in range(0, int(num_examples ), BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train_grayscale[offset:end], y_train[offset:end]
_, summary = sess.run([training_operation, summary_op], feed_dict={x: batch_x, y: batch_y})
step_counter += 1
train_writer.add_summary(summary, step_counter)
training_accuracy, training_loss = evaluate(X_train_grayscale, y_train)
validation_accuracy, validation_loss = evaluate(X_valid_grayscale, y_valid)
print("EPOCH {} ...".format(i+1))
print("Training acu/loss; Validation acu/loss = {:.3f} / {:.3f}; {:.3f} / {:.3f}".format(training_accuracy, training_loss, validation_accuracy, validation_loss))
print()
if i >= 5:
if validation_accuracy > best_accuracy: #and validation_loss < best_loss:
best_accuracy = validation_accuracy
best_loss = validation_loss
best_epoch = i
estopping = EARLY_STOPPING
saver.save(sess, './lenet')
print("best model saved.")
else:
estopping -=1
if estopping <= 0:
print("early stopping...")
print("best epoch {}, best accuracy {}, best loss {}".format(best_epoch, best_accuracy, best_loss))
return best_accuracy
return best_accuracy
accuracy=0
n = 1
for i in range(n):
accuracy += runOnce(X_train_grayscale, y_train)
print('Average accuracy out of {} runs: {:.3f}'.format(n, accuracy / n))
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy, test_loss = evaluate(X_test_grayscale, y_test)
print("Test Accuracy = {:.3f}, Loss = {:.3f}".format(test_accuracy, test_loss))
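The loss and accuracy summaries written by the FileWriter in runOnce land under ./tflog24/train and can be inspected with TensorBoard, for example by running the following from the notebook directory:
tensorboard --logdir ./tflog24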
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
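The predictions later in this section are reported as numeric class ids; a small helper (a sketch assuming signnames.csv has the usual ClassId and SignName columns) maps an id to its readable name:
def signName(class_id):
    # Look up the human-readable sign name for a numeric class id.
    return signnames.loc[signnames['ClassId'] == class_id, 'SignName'].values[0]
print(signName(0))  # example lookup for one class id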
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import os
import matplotlib.image as mpim
import glob
labels = [0, 5, 21, 22, 23, 28, 29]  # class ids of the downloaded web images (one folder per class)
def get_im_cv2(path):
    # cv2.imread returns BGR; convert to RGB so the images match the training data,
    # then resize to the 32x32 network input size.
    img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    resized = cv2.resize(img, (32, 32), interpolation=cv2.INTER_LINEAR)
    return resized
def load_test():
    # Load the downloaded images from ./testing_data/<class id>/*.jpeg and
    # record the true class id of each image.
    X_test = []
    X_test_id = []
    for label in labels:
        path = os.path.join('.', 'testing_data', str(label), '*.jpeg')
        files = sorted(glob.glob(path))
        for fl in files:
            img = get_im_cv2(fl)
            X_test.append(img)
            X_test_id.append(label)
    return np.array(X_test), X_test_id
X_test_real, X_test_real_id = load_test()
def plotRealTestImages():
    # Show the downloaded (already grayscaled) test images with their class ids.
    # Renamed so it does not shadow the per-class plotting function defined earlier.
    fig, axes = plt.subplots(ncols=8, figsize=(12, 1))
    i = 0
    for image, idx in zip(X_test_real_grayscale, X_test_real_id):
        axes[i].set_xticks([])
        axes[i].set_yticks([])
        axes[i].set_title(idx)
        axes[i].imshow(image, cmap='gray')
        i += 1
    plt.show()
X_test_real_grayscale = rgb2gray(X_test_real)
print(X_test_real_grayscale.shape)
#X_test_real_grayscale = histogramNormalized(X_test_real_grayscale.astype('uint8'))
print('before mean normalization, {:.3f}'.format(np.mean(X_test_real_grayscale)))
image_mean_real = np.mean(X_test_real_grayscale)
X_test_real_grayscale = (X_test_real_grayscale - image_mean_real) / 180
print('after mean normalization, {:.3f}'.format(np.mean(X_test_real_grayscale)))
plotRealTestImages()
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
# Top-5 softmax probabilities (values, indices) for each prediction.
top_5_operation = tf.nn.top_k(tf.nn.softmax(logits), k=5)
def evaluate_with_cross_entropy(X_data, y_data):
    # Run the whole (small) set as a single batch and also fetch the top-5
    # softmax probabilities and their class ids for each image.
    sess = tf.get_default_session()
    accuracy, loss, cross_entropy_values, cross_entropy_ids = sess.run(
        [accuracy_operation, loss_operation, top_5_operation[0], top_5_operation[1]],
        feed_dict={x: X_data.reshape([-1, 32, 32, 1]), y: y_data})
    return accuracy, loss, cross_entropy_ids, cross_entropy_values
def inspectTestingImages(r,p):
fig_size = plt.rcParams["figure.figsize"]
# Set figure width to 12 and height to 9
fig_size[0] = 6
fig_size[1] = 3
plt.rcParams["figure.figsize"] = fig_size
index = np.arange(nclasses)
plt.bar(index, label='real', width=0.45, height= r, color='b')
plt.bar(index + 0.45, width=0.45, label='predicted', height= p, color='g')
plt.title('Testing image real label vs predictions')
plt.xlabel('label')
plt.ylabel('Confidence')
plt.legend()
plt.show()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy, test_loss, cross_entropy_ids, cross_entropy_values = evaluate_with_cross_entropy(X_test_real_grayscale.reshape([-1,32,32,1]), X_test_real_id)
print("Test Accuracy = {:.3f}, Loss = {:.3f}".format(test_accuracy, test_loss))
    print("Top 5 softmax probabilities per image")
i = 0
for ids, ps in zip(cross_entropy_ids, cross_entropy_values):
real = np.zeros(nclasses)
pred = np.zeros(nclasses)
real[X_test_real_id[i]] = 1
for j in range(5):
pred[ids[j]] = ps[j]
print("# {}, real label {}".format(i, X_test_real_id[i]))
inspectTestingImages(real, pred)
i += 1